63 research outputs found

    Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions

    The rapid adoption of artificial intelligence (AI) necessitates careful analysis of its ethical implications. In addressing ethics and fairness implications, it is important to examine the whole range of ethically relevant features rather than individual agents alone. This can be accomplished by shifting perspective to the systems in which agents are embedded, a view encapsulated in the macro ethics of sociotechnical systems (STS). Through the lens of macro ethics, the governance of systems, where participants try to promote outcomes and norms that reflect their values, is key. However, multiuser social dilemmas arise in an STS when its stakeholders have differing value preferences or when norms in the STS conflict. To develop equitable governance that meets the needs of different stakeholders, and to resolve these dilemmas satisfactorily with the higher goal of fairness, we need to integrate a variety of normative ethical principles in reasoning. We understand normative ethical principles as operationalizable rules inferred from philosophical theories. A taxonomy of ethical principles is thus beneficial for enabling practitioners to apply them in reasoning. This work develops a taxonomy of normative ethical principles that can be operationalized in the governance of STS. We identify an array of ethical principles, with 25 nodes on the taxonomy tree. We describe the ways in which each principle has previously been operationalized, suggest how the operationalization of each principle may be applied to the macro ethics of STS, and explain potential difficulties that may arise with each principle. We envision that this taxonomy will facilitate the development of methodologies for incorporating ethical principles in reasoning capacities for governing equitable STS.
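    One way to picture the taxonomy is as a tree whose nodes pair a principle with notes on its operationalization. The following minimal Python sketch shows such a structure; the node names and operationalization strings are illustrative placeholders, not the paper's actual 25-node taxonomy.

        # A hypothetical encoding of a principle taxonomy; all names below are
        # illustrative placeholders, not the paper's actual nodes.
        from dataclasses import dataclass, field

        @dataclass
        class Principle:
            name: str
            operationalization: str          # rule inferred from the underlying theory
            children: list = field(default_factory=list)

            def walk(self, depth=0):
                """Preorder traversal yielding (depth, node) pairs."""
                yield depth, self
                for child in self.children:
                    yield from child.walk(depth + 1)

        root = Principle("normative ethical principles", "root of the taxonomy", [
            Principle("consequentialist", "compare expected outcomes of enactments", [
                Principle("utilitarianism", "prefer outcomes maximizing aggregate utility"),
            ]),
            Principle("deontological", "check each action against duties and norms"),
        ])

        for depth, node in root.walk():
            print("  " * depth + f"{node.name}: {node.operationalization}")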

    Desen: Specification of Sociotechnical Systems via Patterns of Regulation and Control

    We address the problem of engineering a sociotechnical system (STS) with respect to its stakeholders’ requirements. We motivate a two-tier STS conception comprising a technical tier, which provides control mechanisms and describes what actions the software components allow, and a social tier, which characterizes the stakeholders’ expectations of each other in terms of norms. We adopt agents as computational entities, each representing a different stakeholder. Unlike previous approaches, our framework, Desen, incorporates the social dimension into the formal verification process. Thus, Desen supports agents potentially violating applicable norms, a consequence of their autonomy. In addition to requirements verification, Desen supports refinement of STS specifications via design patterns to meet stated requirements. We evaluate Desen at three levels. First, we illustrate how Desen carries out refinement via the application of patterns to a hospital emergency scenario. Second, we show via a human-subject study that a design process based on our patterns helps participants who are inexperienced in conceptual modeling and norms. Third, we provide an agent-based environment that simulates the hospital emergency scenario to compare STS specifications (including participant solutions from the human-subject study) on metrics of social welfare, norm compliance, and other domain-dependent measures.
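    To make the two-tier conception concrete, the simplified Python sketch below separates hard control (what the software makes possible) from norms (what stakeholders expect of each other, and may violate). The roles, actions, and norm encoding are assumptions for illustration, not Desen's formal language.

        # Technical tier: control determines which actions are possible at all.
        CONTROL = {
            "physician": {"prescribe", "access_record"},
            "nurse": {"access_record"},
        }

        # Social tier: (role, action, exception) prohibitions that autonomous
        # agents can violate; these are hypothetical norms for illustration.
        PROHIBITIONS = [("nurse", "access_record", "emergency")]

        def enact(role, action, context):
            if action not in CONTROL.get(role, set()):
                return "blocked"              # control makes the action impossible
            for r, a, exception in PROHIBITIONS:
                if r == role and a == action and exception not in context:
                    return "violation"        # norm violated, yet the action occurs
            return "compliant"

        print(enact("nurse", "prescribe", set()))              # blocked
        print(enact("nurse", "access_record", set()))          # violation
        print(enact("nurse", "access_record", {"emergency"}))  # compliant

    A verifier in this spirit would check whether stakeholder requirements hold even under enactments containing violations, rather than assuming norm compliance.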

    Understanding dynamics of polarization via multiagent social simulation

    It is widely recognized that the Web contributes to user polarization, and such polarization affects not just politics but also people’s stances on public health matters such as vaccination. Understanding polarization in social networks is challenging because it depends not only on user attitudes but also on their interactions and exposure to information. We adopt Social Judgment Theory to operationalize attitude shift, and we model user behavior based on empirical evidence from past studies. We design a social simulation to analyze how content sharing affects user satisfaction and polarization in a social network. We investigate the influence of varying tolerance in users and of selectively exposing users to congenial views. We find that (1) higher user tolerance slows down polarization but leads to lower user satisfaction; (2) higher selective exposure leads to higher polarization and lower user reach; and (3) both higher tolerance and higher selective exposure lead to a more homophilic social network.
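    As one way to operationalize Social Judgment Theory's attitude shift, the Python sketch below assimilates messages within a tolerance band and contrasts away from distant ones, with selective exposure filtering which messages are delivered. The thresholds, step size, and exposure rule are assumptions for illustration, not the paper's calibrated model.

        import random

        def sjt_update(attitude, message, tolerance, step=0.1):
            """Shift attitude per Social Judgment Theory (illustrative thresholds)."""
            gap = message - attitude
            if abs(gap) <= tolerance:        # latitude of acceptance: assimilate
                return attitude + step * gap
            if abs(gap) >= 2 * tolerance:    # latitude of rejection: contrast away
                return attitude - step * gap
            return attitude                  # latitude of noncommitment: no shift

        random.seed(0)
        attitudes = [random.uniform(-1.0, 1.0) for _ in range(100)]
        TOLERANCE, SELECTIVE_EXPOSURE = 0.3, 0.7

        for _ in range(10_000):              # one shared item delivered per step
            sender, receiver = random.sample(range(100), 2)
            message = attitudes[sender]
            congenial = abs(message - attitudes[receiver]) <= TOLERANCE
            if congenial or random.random() > SELECTIVE_EXPOSURE:
                updated = sjt_update(attitudes[receiver], message, TOLERANCE)
                attitudes[receiver] = max(-1.0, min(1.0, updated))

        mean = sum(attitudes) / len(attitudes)
        variance = sum((a - mean) ** 2 for a in attitudes) / len(attitudes)
        print("attitude variance (crude polarization proxy):", variance)

    Raising TOLERANCE widens the acceptance band, so more encounters assimilate rather than contrast, which is consistent with the reported slowing of polarization.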

    Enhancing Creativity as Innovation via Asynchronous Crowdwork

    Synchronous, face-to-face interactions such as brainstorming are considered essential for creative tasks (the old normal). However, face-to-face interactions are difficult to arrange because of the diverse locations and conflicting availability of people, a challenge made more prominent by work-from-home practices during the COVID-19 pandemic (the new normal). In addition, face-to-face interactions are susceptible to cognitive interference. We employ crowdsourcing as an avenue to investigate creativity in asynchronous, online interactions. We choose product ideation, a natural task for the crowd since it requires human insight and creativity about which product features would be novel and useful. We compare the performance of solo crowd workers with that of asynchronous teams of crowd workers formed without prior coordination. Our findings suggest that, first, crowd teamwork yields fewer but more creative ideas than solo crowdwork. Second, cognitive interference, known to inhibit creativity in face-to-face teams, may not be significant in crowd teams. Third, teamwork promotes better achievement emotions for crowd workers. These findings provide a basis for trading off creativity, quantity, and worker happiness when setting up crowdsourcing workflows for product ideation.

    Prosocial Norm Emergence in Multiagent Systems


    Kont: Computing tradeoffs in normative multiagent systems

    We propose Kont, a formal framework for comparing normative multiagent systems (nMASs) by computing tradeoffs between liveness (something good happens) and safety (nothing bad happens). Safety-focused nMASs restrict agents’ actions to avoid undesired enactments. However, such restrictions hinder liveness, particularly in situations such as medical emergencies. We formalize tradeoffs using norms and develop an approach for understanding to what extent an nMAS promotes liveness or safety. We propose patterns to guide the design of an nMAS with respect to liveness and safety, and we prove their correctness. We further quantify liveness and safety using heuristic metrics for an emergency healthcare application, and we show that the application’s results corroborate our theoretical development.
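    The sketch below illustrates one way such a tradeoff can be quantified: given the enactments a specification allows, liveness is the fraction in which something good eventually happens and safety the fraction in which nothing bad ever happens. The enactments and the good/bad state labels are illustrative assumptions, not Kont's formalism.

        def liveness(enactments, good):
            """Fraction of enactments where some good state eventually occurs."""
            return sum(any(s in good for s in e) for e in enactments) / len(enactments)

        def safety(enactments, bad):
            """Fraction of enactments where no bad state ever occurs."""
            return sum(not any(s in bad for s in e) for e in enactments) / len(enactments)

        GOOD, BAD = {"treat"}, {"leak_record"}

        # Hypothetical emergency-care enactments: a permissive spec admits risky
        # runs; a safety-focused spec prunes them, also pruning good outcomes.
        permissive = [
            ("triage", "treat"),
            ("emergency", "skip_triage", "treat"),
            ("emergency", "skip_triage", "leak_record"),
        ]
        restrictive = [("triage", "treat"), ("emergency",)]  # skipping triage forbidden

        for name, spec in [("permissive", permissive), ("restrictive", restrictive)]:
            print(f"{name}: liveness={liveness(spec, GOOD):.2f}, safety={safety(spec, BAD):.2f}")

    Here the restrictive specification attains perfect safety but lower liveness, the tradeoff the framework is designed to expose.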
    • …